Search Results: "Matthew Palmer"

16 November 2014

Matthew Palmer: A benefit of running an alternate init in Debian Jessie

If you're someone who doesn't like Debian's policy of automatically starting daemons on install (or its heinous cousin, the RUN or ENABLE variable in /etc/default/<service>), then running an init system other than systemd should work out nicely.

15 October 2014

Matthew Palmer: My entry in the "Least Used Software EVAH" competition

For some reason, I seem to end up writing software for very esoteric use-cases. Today, though, I think I've outdone myself: I sat down and wrote a Ruby library to get and set process resource limits, those things that nobody ever thinks about except when they run out of file descriptors. I didn't even have a direct need for it. Recently I was grovelling through the EventMachine codebase, looking at the filehandle limit code, and noticed that the pure-ruby implementation didn't manipulate filehandle limits. I considered adding it, then realised that there wasn't a library available to do it. Since I haven't berked around with FFI for a while, I decided to write rlimit. Now to find the time to write that patch for EventMachine... Since I doubt there are many people who have a burning need to manipulate rlimits in Ruby, this gem will no doubt sit quiet and undisturbed in the dark, dusty corners of rubygems.org. However, for the three people on earth who find this useful: you're welcome.
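For a flavour of what this is about, here's a minimal sketch using Ruby's built-in Process module (not the gem's own API, which may differ): print the current file-descriptor limits, then raise the soft limit up to the hard limit.
# Sketch only: uses Ruby's core Process module rather than the rlimit gem itself
ruby -e 'soft, hard = Process.getrlimit(Process::RLIMIT_NOFILE)
         puts "soft=#{soft} hard=#{hard}"
         Process.setrlimit(Process::RLIMIT_NOFILE, hard, hard)'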

30 August 2014

Matthew Palmer: Chromium tabs crashing and not rendering correctly?

If you've noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you're not alone. Thankfully, it's relatively easy to work around. It will hit people who keep their browser open for a long time, or who have lots of tabs (or, if you're like me, do both). To tell if you're suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:
[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)]
Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy
failed: Too many open files
And
[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)]
Cannot create shared memory buffer
If you see those errors, congratulations! The rest of this blog post will be of use to you. There's probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there's a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There's no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open. System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:
* - nofile 65535
You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect next time you log in, or perhaps even only when you restart X (or, at worst, your entire machine). This doesn't help if you've got Chromium already open and you'd like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record). In that case, you can use a magical tool called prlimit. The prlimit syscall is available if you're running a Linux 2.6.36 or later kernel and at least glibc 2.13. You'll have a prlimit command-line program if you've got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:
prlimit <PID> 65535 65535
The <PID> argument is taken from the first number in the log messages from .xsession-errors; in the example above, it's 22161. And now you can go back to using your tabs as ersatz bookmarks, like I do.
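If you do have the util-linux prlimit program, the equivalent invocation is a one-liner; a sketch, using the 22161 PID from the log excerpt above:
# Raise both the soft and hard NOFILE limits of an already-running process
prlimit --pid 22161 --nofile=65535:65535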

24 July 2014

Matthew Palmer: First Step with Clojure: Terror

$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]
Wait... what? lein downloads some random JARs from a website over HTTP[1], with, as far as I can tell, no verification that what I'm asking for is what I'm getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that's no safety net: if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there's no signature on the .jar itself, either. It gets better, though. The repo1.maven.org site is served by the fastly.net[2] pseudo-CDN[3], which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers. I've seen Debian take a kicking more than once because packages aren't individually signed, or because packages aren't served over HTTPS. But at least Debian's packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys. This repository, on the other hand... oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there's no way for me (or anyone) to trust the signature: it was made by a key that's signed by one other key, which itself has no signatures. If I were an attacker, it wouldn't be hard for me to replace that key chain with one of my own devising. Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it's pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn't hard to do, and there's no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that the code is trustworthy, is very disappointing. Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they're a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a maven.org distribution root key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn't be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn't be perfect, but don't make the perfect the enemy of the good. Cost-effective improvements are possible here. Yes, security is hard. But you don't get to ignore it just because of that, when you're creating an attractive nuisance for anyone who wants to own a whole passel of machines by slipping some dodgy code into a widely-used package.
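For what it's worth, the manual verification the tooling isn't doing looks something like this; a sketch only, using the artifact path from the transcript above, and with the usual caveat that a signature is only as good as your trust in the signing key:
# Fetch the jar and its detached OpenPGP signature, then verify it by hand
BASE=http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0
wget $BASE/clojure-1.3.0.jar $BASE/clojure-1.3.0.jar.asc
gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar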
  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that's just icing on the shit cake.
  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly's caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn't help me to distinguish between "I'm a professionally-run distributed proxy" and "I'm a pwned box here to hammer your site into the ground").
  3. Pretty much all of the new breed of so-called CDNs aren't actually pro-actively distributing content; they're just proxies. That isn't a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they're providing advanced capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

23 July 2014

Matthew Palmer: Per-repo update hooks with gitolite

Gitolite is a popular way to manage collections of git repositories entirely from the command line; it's configured using configuration stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it's quite a valuable system. In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself, something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to chain multiple hooks together, too, which is a nice touch. You can, for example, define hooks for "validate style guidelines", "submit patch to code review" and "push to the CI server". Then, for each repo, you can pick which of those hooks to execute. It's neat. There's one glaring problem, though: you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a "virtual ref"; they're stored in a separate configuration directory, use a different syntax in the config file, and if you're trying to learn what they do, you'll spend a fair bit of time on them. The documentation describes VREFs as "a mechanism to add additional constraints to a push". The association between that and the update hook is one you get to make for yourself. The interesting thing is that there's no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality. The reason I'm posting it here is that I tried to submit it to the primary gitolite developer, and was told "I'm not touching the update hook [...] I'm not discussing this [...] take it or leave it". So instead, I'm publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!
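For context, this is roughly what per-repo hook configuration looks like in the gitolite-admin conf, going from memory of the gitolite docs (so treat the exact syntax as something to verify there); the repo and hook script names are made up, echoing the examples above:
# Requires 'repo-specific-hooks' to be enabled in the rc file, with the hook
# scripts living under the local-code hooks directory
repo widgets
    option hook.post-receive = validate-style code-review push-to-ci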

8 July 2014

Matthew Palmer: Doing Password Complexity Wrong

I just made an account on yet another web service. On the suggestion of my password manager, I attempted to use the password "W:9[$X*F". It was rejected because "Password must contain at least one non-alphabet character, one lowercase letter, one uppercase letter". OK, how about "Passw0rd"? Yep, that's fine. Anyone want to guess which of those two passwords is going to fall victim to a brute-force attack first? Go on, don't be shy, take a wild shot in the dark!
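For the record, the arithmetic isn't subtle; a back-of-the-envelope sketch (94 is roughly the number of printable ASCII characters):
# Entropy of an 8-character password drawn uniformly from ~94 printable characters
echo 'l(94^8)/l(2)' | bc -l    # ~52.4 bits
# "Passw0rd" is in every cracker's wordlist, so its effective entropy is close
# to zero, however many character classes it happens to tick.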

6 July 2014

Matthew Palmer: Witness the security of this fully DNSSEC-enabled zone!

After dealing with the client side of the DNSSEC puzzle last week, I thought it behooved me to also go about getting DNSSEC going on the domains I run DNS for. Like the resolver configuration, the server-side work is straightforward enough once you know how, but boy howdy are there some landmines to be aware of. One thing that made my job a little less ordinary is that I use and love tinydns. It's an amazingly small and simple authoritative DNS server, strong in the Unix tradition of "do one thing and do it well". Unfortunately, DNSSEC is anything but small and simple, and so tinydns doesn't support DNSSEC out of the box. However, Peter Conrad has produced a patch for tinydns to do DNSSEC, and that does the trick very nicely. A brief aside about tinydns and DNSSEC, if I may... Poor key security is probably the single biggest compromise vector for crypto. So you want to keep your keys secure. A great way to keep keys secure is to not put them on machines that run public-facing network services (like DNS servers). So, you want to keep your keys away from your public DNS servers. A really great way of doing that would be to have all of your DNS records somewhere out of the way, and when they change, regenerate the zone file, re-sign it, and push it out to all your DNS servers. That happens to be exactly how tinydns works. I happen to think that tinydns fits very nicely into a DNSSEC-enabled world. Anyway, back to the story. Once I'd patched the tinydns source and built updated packages, it was time to start DNSSEC-enabling zones. This breaks down into a few simple steps:
  1. Generate a key for each zone. This will produce a private key (which, as the name suggests, you should keep to yourself), a public key in a DNSKEY DNS record, and a DS DNS record. More on those in a minute. One thing to be wary of, if you're like me and don't want or need separate Key Signing and Zone Signing keys: you must generate a Key Signing key, which is a key with a flags value of 257 (there's a key-generation sketch below). Doing this wrong will result in all sorts of odd-ball problems. I wanted to just sign zones, so I generated a Zone Signing key, which has a flags value of 256. Big mistake. Also, the DS record is a hash of everything in the DNSKEY record, so don't just think you can change the 256 to a 257 and everything will still work. It won't.
  2. Add the key records to the zone data. For tinydns, this is just a matter of copying the zone records from the generated key into the zone file itself, and adding an extra pseudo record (it's all covered in the tinydnssec howto).
  3. Publish the zone data. Reload your BIND config, run tinydns-sign and tinydns-data then rsync, or do whatever it is PowerDNS people do (kick the database until replication starts working again?).
  4. Test everything. I found the Verisign Labs DNSSEC Debugger to be very helpful. You want ticks everywhere except for where it's looking for DS records for your zone in the higher-level zone. If there are any other freak-outs, you'll want to fix those, because broken DNSSEC will take your domain off the Internet in no time.
  5. Tell the world about your DNSSEC keys. This is simply a matter of giving your DS record to your domain registrar, for them to add it to the zone data for your domain's parent. Wherever you'd normally go to edit the nameservers or contact details for your domain, you probably want to go to the same place and look for something about DS or Domain Signer records. Copy and paste the details from the DS record in your zone into there, submit, and wait a minute or two for the records to get published.
  6. Test again. Before you pat yourself on the back, make sure you've got a full board of green ticks in the DNSSEC Debugger. If anything's wrong, you want to roll back immediately, because broken DNSSEC means that anyone using a DNSSEC-enabled resolver just lost the ability to see your domain.
That's it! There's a lot of complicated crypto going on behind the scenes, and DNSSEC seems to revel in the number of acronyms and concepts that it introduces, but the actual execution of DNSSEC-enabling your domains is quite straightforward.
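For the record, here's what the key-generation step can look like. This sketch uses BIND's dnssec-keygen rather than the tinydnssec tooling described above, and example.com is a stand-in for your zone, but the flags-field point is the same:
# Generate a Key Signing Key (flags value 257); leaving out -f KSK gives you a
# Zone Signing Key (flags 256), which is the mistake described in step 1
dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE example.com
# Derive the DS record to hand to your registrar (the key tag in the generated
# filename will be whatever you got above)
dnssec-dsfromkey Kexample.com.+008+*.key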

29 June 2014

Matthew Palmer: Adventures in DNSSEC

DNSSEC is one of those things that have been on the "I should poke at that sometime" list, but today I decided to actually get in and play around with it. Turns out it's pretty trivially easy to set up, but I got bitten by a few gotchas during testing that I thought worthwhile to document. Since I like to explain things, a lot of this article isn't "how to" so much as "how does".

Configuring The Resolver

First off, you need a resolver[1] that you can trust, and which implements DNSSEC validation. In theory, you could use your ISP's resolvers (if they're DNSSEC-enabled), or even Google's public resolvers (8.8.8.8 and 8.8.4.4), but it's really not recommended: how much do you trust those, really? More importantly, how much do you trust the network path between them and you? Frankly, running a local on-host caching resolver is trivial, consumes next-to-no resources, and is highly recommended. It's what I did. I used unbound, finally casting free the venerable dnscache, which has served me well for many years. I chose unbound because it fits the "do one small thing, and do it well" philosophy that I cherish (unlike BIND), and because it's what all the cool kids appear to be using these days. Installing it was as simple as apt-get install unbound; it's available in pretty much every distro, I believe. Out of the box, unbound in Debian wheezy comes configured for DNSSEC validation. Other methods of installation might not be so amenable, so just make sure your unbound.conf looks like this, so that DNSSEC is enabled:
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    val-log-level: 2
The val-log-level line isn't strictly required, but I found it useful to have unbound log info on validation failures to syslog. Season to taste. You also have to configure your system to actually use your new, shiny local resolver. Editing /etc/resolv.conf would, in an ideal world, work Just Fine, but between Magical Unhelpful Pixies (I'm looking at you, NetworkManager) and DHCP, it often isn't that easy. Ultimately, though, you should configure your system so that the only resolver in /etc/resolv.conf is always 127.0.0.1. Anything else is a security hole waiting to happen. If you're using DHCP and a recent version of ISC dhclient, adding supersede domain-name-servers 127.0.0.1; to dhclient.conf should do the needful.

Weighing the (Trust) Anchor

Whatever local resolver you choose, you need to tell it about "trust anchors". This is a key (or keys) which are trusted to sign all the other data that the resolver will find. Like the rest of the DNS, DNSSEC is hierarchical, and you need to know who to start with before you can start. To resolve a name using DNS, you ask the root servers for the answer to a question (say, "what is the address of www.example.com?"). The root server doesn't know the answer, but it knows who to ask, and it tells you where to go (typically, "go ask the DNS servers for .com, which are X, Y, and Z"). Then you ask those servers, and they probably don't know either, but they know who to ask, and they tell you to go ask those servers. This continues until you finally get a server which says, "oh, yes, I know the answer to that!" and hands over the information you're after. In DNSSEC, you get digital signatures that prove that the answers you get from each DNS server you ask are legit, and not forged. So when you ask the root servers, "what is the address of www.example.com?", they answer, "I don't know, but go ask the DNS servers at addresses X, Y, and Z, and by the way, you can trust this answer is legit because here's this signature you can check". To verify a signature is legit, you need to know that the key used to make the signature is trustworthy, and that it wasn't just any ol' key that made it, similar to how an ink scribble on a page doesn't mean anything unless you know what the signature you're looking at should look like. DNSSEC has two ways to make sure you know who to trust. Firstly, those trust anchors I mentioned three paragraphs ago. These are the keys that the root servers use to sign the responses they send out. You tell unbound (or your local DNSSEC-enabled resolver of choice), "if you see a signature provided by this trust anchor key, you can trust it to be legit". The second way to trust a key is to have someone you already trust say, "oh, you can trust this key, too". This is done by providing more data in the answer that you get from the root servers. So the full answer you get from the root servers when you ask "what is the address of www.example.com?" looks like this:
I don't know, but you should ask the DNS servers at addresses X, Y, and Z. You can trust their answers because they'll sign their responses with key K. Finally, you can trust that I'm legit because of this signature I'm providing you, generated from key T.
Assuming that T is the trust-anchor key you've already told unbound to trust, it can then trust all of the information it got from that first question. When it then goes to DNS server X and asks, "Hey, do you know the address of www.example.com?", that server can send back its own response saying:
Gee, shucks, I don't know, but DNS servers at P and Q can help you. You can trust them because they'll sign responses with key S. And just to show you I'm on the level, check out this nifty signature I made, signed with key K.
Since a server we trust told us to trust key K, and then we got a response signed by key K that told us to trust key S, when we then go and ask server P and get a response signed by S, we're still demonstrably secure. Neat setup, right? Anyway, now you know all about trust anchors. How do you set it up? For unbound, it's capable of priming the trust anchor itself. It does this by retrieving a copy of the trust anchor from a well-known HTTPS URL (https://data.iana.org/root-anchors/root-anchors.xml), which provides a reasonable measure of security: the trust anchor file should be safe as long as that HTTPS server is trustworthy. For the properly paranoid, you should determine an alternate means of verifying that you have the correct root key (a web search for "DNSSEC root key attestation" would probably be a good start).
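Two commands are handy here (shown as a sketch): unbound ships an unbound-anchor helper that does this priming for you, and you can eyeball the root zone's published keys yourself with dig.
# Create or update the root trust anchor file referenced in unbound.conf above
sudo unbound-anchor -a /var/lib/unbound/root.key
# Look at the root zone's DNSKEY records; the one with flags 257 is the
# Key Signing Key that the trust anchor corresponds to
dig DNSKEY . +dnssec +multi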

Testing

It's always handy to know that something's working as you expect. For something that's important for security, that goes double. So, how do you test that your DNSSEC-capable resolver is, indeed, working as designed? Well, firstly, you can test that the resolver, by itself, is working. This is simple: do a query for a zone known to have broken DNSSEC. Thankfully, some nice people (Comcast, specifically) have already set this up. You can test a resolver for DNSSEC-correctness with this command:
dig @127.0.0.1 www.dnssec-failed.org
If dig gives you back an IP address, something is broken. The correct, "my DNSSEC works!" response should be a status: SERVFAIL response. That's all well and good, but having a resolver that works isn't much good if nothing's actually using it. Your system configuration has probably got some other resolvers configured into it (such as those provided by a DHCP server), and you'll want to make sure those aren't getting used[2]. The trivial test is to visit http://www.dnssec-failed.org in your browser. If that comes back with a page, then something's still wrong. A more user-friendly test of your DNSSEC configuration is provided by http://dnssectest.sidnlabs.nl/test.php, which will display a page either way, and give you a tick or a cross depending on whether you're DNSSEC-enabled or not. One thing to be wary of when you're doing browser tests is caching. I spent far too long trying to work out why unbound "wasn't working", when in actual fact it was working fine; it's just that both my browser and local caching proxy (squid) were caching DNS requests[3]. So, even after I'd correctly set up unbound, visiting the test sites was providing me with erroneous results. Fixing up the browser was simple enough. In Chromium, visiting chrome://net-internals#dns will show you what's being cached, what resolv.conf settings are being used, and allow you to clear the browser's resolver cache. Squid was similarly simple, although it took me a while to figure out that it was causing me grief (software that "just works", being so rare, tends to slip your mind, and so I often forget that squid is actually doing me good). Restarting squid has the appropriate result (clears the DNS cache), but in the end I decided that having another layer of caching wasn't going to help significantly when I've got an on-host DNS cache already, in unbound. So, I just turned the knobs for DNS caching in squid right down:
negative_dns_ttl 1 second
positive_dns_ttl 1 second
And voila! All was well.
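As a complementary positive test to the broken-zone query above, you can ask your local resolver about a domain you know to be signed (debian.org, say) and check for the ad (authenticated data) flag in the response; a sketch:
# A validating resolver sets the "ad" flag on answers it has successfully validated
dig @127.0.0.1 +dnssec debian.org A | grep 'flags:'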

In summary

DNSSEC is cool. It's easy to get going with, and over time it will provide more and more practical benefits (I am a DANE fanboi). Get started with it now, and you'll be ahead of the curve.
  1. A resolver is a program which takes a name, does all the DNS queries required to get the data associated with that name, and returns it to whoever wanted to know. This can be a library which is linked into a client application, or it can be a separate program which receives DNS requests itself. There are usually both types of resolvers in use on a typical Linux system to resolve names. On every machine, in every program, there is the so-called "stub resolver", which is part of libc, and does nothing except create DNS requests and send them to the addresses listed in resolv.conf. Also present, although not always running on every machine (typically your ISP or network admin has some running for everyone to share), are one or more "recursive resolvers", which are separate programs that take DNS requests, do all of the queries themselves, and return the result to the stub resolver for presentation to the client program which made the request in the first place. You will see this material again.
  2. You can configure dhclient to use your local resolver instead of the DHCP-provided ones by simply setting:
    supersede domain-name-servers 127.0.0.1;
    
    in your dhclient.conf file.
  3. It also didn't help that I hadn't correctly knobbled dhclient the first time around, and so even when I thought I was only resolving against my local unbound server, it was still actually using the DHCP-provided resolvers...

25 June 2014

Matthew Palmer: Moving forward with an SSL Co-op

Since first posting my idea for an SSL co-op a couple of weeks ago, I've gotten some positive feedback from people, and further thinking and research has convinced me that it is feasible to at least attempt it. As a result, I'd like to announce the public unveiling of The SSL Co-op. It is intended to be a commercial, not-for-profit[1] organisation that issues widely-trusted certificates to members, for their use or for resale. Eventually, I'd like the co-op to be a root CA in its own right, with its certificate trusted by all the browsers and other X.509-using applications out there, but that isn't something that's achievable immediately. At this stage, the co-op hasn't been formed, and I'm looking for expressions of interest from individuals and organisations who would be interested in becoming members. If you fit that description, I'd really appreciate it if you could fill out a short survey so I can get a better idea of what sort of scale the co-op will be operating at initially. This is the first step towards an interesting future, where there is more choice of provider for online identity verification. Exciting times.
  1. Despite a lot of misunderstanding to the contrary, "commercial, not-for-profit" is not a contradiction. Commercial means "doing things for money", and not-for-profit means "not returning a dividend to investors". In the case of the SSL co-op, it will be providing services to members on a cost-recovery basis, and any excess funds left over from that will be re-invested in the co-op to improve the services provided to members.

22 June 2014

Matthew Palmer: Key Transition Statements: Worthless?

Ten days ago, I blogged about (finally) generating a GPG key transition statement. As the title of this post suggests, I have received zero signatures. Have other people had any success with transition statements? Perhaps it's time to hit up Debian developers I know one-by-one... To the IRCz!

12 June 2014

Matthew Palmer: GPG key transition

I'm probably the last person on the planet to get around to doing this, but I figure it's probably about time my venerable old 1024D key was put out to pasture. My transition statement (signed, as usual, by both the old and new keys) is available. If you signed my old key, and still trust that I am who I think I am, I'd appreciate it if you could sign my new key and fling the signatures at me or the keyserver networks.
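If you've forgotten the mechanics, the dance looks something like this (a sketch; 0xNEWKEY is a placeholder, not my actual key ID):
# Fetch the new key, sign it with your own key, and push the signature back out
gpg --recv-keys 0xNEWKEY     # 0xNEWKEY stands in for the real fingerprint
gpg --sign-key 0xNEWKEY
gpg --send-keys 0xNEWKEY     # or: gpg --armor --export 0xNEWKEY | mail ...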

5 June 2014

Matthew Palmer: An SSL Certificate Cooperative?

I've been pondering co-ops, and their value in the world, lately. The thought has come to mind that a co-op which ran a Certificate Authority seems like an absolute no-brainer: the members each contribute to the operational costs (servers, software, and the eye-watering auditing expenses) and in return get to issue all the domain-verified certificates they like (and pay reasonable rates for organization-verified certificates, and perhaps even EV certs). I know there are some superficially similar examples of this idea out there already. StartSSL captured my heart with their policy of only charging for what costs money (so you can get free DV certificates for personal use), but they made me sad when they charged for revocations (yes, I know revocations cost money to serve the CRL, but OCSP ain't free either...). CAcert tickles my Free Software Fan-bone, being all about the freedom and community involvement, but loses some practicality points on the fact that they've been trying to get their root certs into the browsers for a long time and haven't really gotten anywhere. So, assuming that DNSSEC (and hence DANE) doesn't become universally available any time soon, leaving the CA business model dead, buried, and worm-eaten, is the idea of a cooperative certificate authority of interest to anyone? Surely anyone who wished to have more control over their SSL certificates than you get with a reseller relationship, but couldn't justify becoming a root certificate holder themselves, would see the value of something of this nature?

3 May 2014

Matthew Palmer: How *not* to do a redirect

This is the entirety of a (purportedly) HTML page I just got:
<script language="javascript">
  window.location = "http://example.com/obscured/to/protect/the/guilty"
</script>
To compound the pain, this didn't come from a site run by people who wouldn't be expected to know any better; it's associated with a rather popular web-oriented test framework. So the project should contain at least one person who might pipe up and say, "WTF, don't do that!". I'm up to about 7 things that are wrong with this. Anyone want to weigh in with their own enumeration of why this is shockingly bad?
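For contrast, a real redirect lives at the HTTP layer, where every client (JavaScript or not) can see and follow it; a sketch, with a purely illustrative URL:
# A proper redirect is a 301/302 status plus a Location header, which even
# curl (no JavaScript engine) can follow with -L
curl -sIL http://example.com/old/path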

11 January 2014

Matthew Palmer: Keep your gems clean!

Ruby has a reputation for being slow. But there's slow, and then there's "three seconds to show the help for a fairly simple command line program". That's not slow, that's ridiculous. The gem cleanup command is the solution. Rubygems has a nice feature whereby you can have multiple versions of a gem installed at once. That's neat, because it allows programs with different gem version requirements to co-exist on the system. Unfortunately, over time, you can end up with many versions of a gem installed, as upgrades pull in ever-newer versions of all your gems. Couple this accumulation of cruft with the need for Rubygems to look at every one of them on every startup, and you can see how you could very quickly end up with a three-second startup time. By running gem cleanup, you'll have the opportunity to nuke all the out-of-date versions of your gems. It takes into account dependencies, so it will at least ask you before nuking a gem that another gem absolutely depends on (I'd prefer a command-line switch to say, "Don't even think about uninstalling something that another gem depends on!", but you can't have everything). If you've got non-gem things on your system that rely on out-of-date gems (anything using bundler, for instance), those will assplode the next time you run them, but you can always re-run bundle install to get back just the specific versions of gems you need. The end result of my little spring cleaning? Down from three seconds to three quarters of a second. A 75% improvement. Win!
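Seeing the cruft for yourself is easy enough; a sketch (the gem name is just an arbitrary example of one that tends to accumulate versions):
# List every installed version of a given gem
gem list mail
# Remove all but the newest version of each installed gem (it will prompt
# before removing anything another gem still depends on)
gem cleanup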

20 December 2013

Matthew Palmer: I am officially smarter than the Internet

Yes, the title is just a scootch self-aggrandising, but I'm rather chuffed with myself at the moment, so please forgive me. It all started with my phone (a regular Samsung Galaxy S3) suddenly refusing to boot, stuck at the initial splash screen ("Samsung Galaxy SIII GT-I9300"). After turning it off and on again a few times (I know my basic problem-solving strategies) and clearing the cache, I decided to start looking deeper. In contrast to pretty much every other Android debugging experience ever, I almost immediately found a useful error message in the recovery system:
E:Failed to mount /efs (Invalid Argument)
"Excellent!", thought I. "An error message. Google will tell me how to fix this!" Nope. The combined wisdom of the Internet, distilled from a great many poorly-spelled forum posts, unhelpful blog posts, and thoroughly pointless articles, was simple: you're screwed, send it back for service. I tried that. Suffice it to say that I will never, ever buy anything from Kogan ever again. I have learnt my lesson. Trying to deal with their support people was an exercise in frustration, and ultimately fruitless. In the end, I decided I'd have some fun trying to fix it myself; after all, it's a failure at the base Linux level. I know a thing or two about troubleshooting Linux, if I do say so myself. If I really couldn't fix it, I'd just go buy a new phone. It turned out to be relatively simple. Here's the condensed version of my notes, in case someone wants to follow in my footsteps. If you'd like expansion, feel free to e-mail me. Note that these instructions are specifically for my Galaxy S3 (GT-I9300), but should work with some degree of adaptation on pretty much any Android phone, as far as I can determine, within the limits of the phone's willingness to flash a custom recovery.
  1. Using heimdall, flash the TeamWin recovery onto your phone (drop into download mode first: hold VolDown+Home+Power):
    heimdall flash --recovery twrp.img
    
  2. Boot into recovery (VolUp+Home+Power), select "Advanced" -> "Terminal", and take an image of the EFS partition onto the external SD card you should already have in the phone:
    dd if=/dev/block/mmcblk0p3 of=/external_sd/efs.img
    
  3. Shut down the phone, mount the SD card on your computer, then turn your EFS partition image into a loopback device and fsck it:
    sudo losetup -f .../efs.img
    sudo fsck -f /dev/loop0
    
    With a bit of luck, the partition won't be a complete write-off and you'll be able to salvage the contents of the files, if not the exact filesystem structure. Incidentally, if the filesystem was completely stuffed, you could get someone else's EFS partition and change the IMEI and MAC addresses and you'd probably be golden, but that would quite possibly be illegal or something, so don't do that.
  4. Now comes the fun part: putting the filesystem back together. After fscking, mount the image somewhere on your computer:
    mount /dev/loop0 /mnt
    
    In my case, I had about a dozen files living in lost+found, and I figured that wasn't a positive outcome. I did try, just in case, writing the fsck'd filesystem image back to the phone, in the hope that it just needed to mount the filesystem to boot, but no dice. Instead, I had to find out where these lost soul^Wfiles were supposed to live. Luckily, a colleague of mine also has an S3 (the ever-so-slightly-different GT-I9300T), and he was kind enough to let me take a copy of his EFS partition, and use that as a file location template. Using a combination of file sizes, permissions/ownerships, and inode numbers (I knew the -i option to ls would come in handy someday!), I was able to put all the lost files back where they should be.
  5. Unmount all those EFS filesystems, losetup -d /dev/loop0, and put the fixed up EFS partition image back onto your SD card for the return trip to the phone.
  6. Now, with a filesystem image that looks reasonable, it's time to write it back onto the phone and see what happens. Copy it onto the SD card, boot up into recovery again, get a shell, and a bit more dd:
    dd if=/external_sd/efs.img of=/dev/block/mmcblk0p3
    
  7. With a bit of luck, your phone may just boot back up now. In my case, I'd done so many other things to my phone trying to get it back up and running (including flashing custom ROMs and what have you) that I needed to flash Cyanogen, boot it, and wait at the boot screen for about 15 minutes (I shit you not, 15 minutes of "gah, is my phone going to work?!?") before it came up and, lo! I had a working phone again. And about 27 SMSes. Sigh, back to work...
So, yeah, neener-neener to the collected wisdom of the tubes. I fixed my EFS partition, and in the great, grand scheme of things, it wasn't even all that difficult. For any phone which (a) allows you to flash a custom recovery, and (b) for which you can find another of the same model to play with, EFS corruption doesn't necessarily mean a fight with tech support. Incidentally, if you happen to have an S3 exhibiting this problem, but you're not comfortable fiddling with it, I'm happy to put your EFS back together again if you pay shipping both ways. It's about a 5 minute job now I know how to do it. E-mail me.

19 December 2013

Matthew Palmer: Truly, nothing is safe

Quoted from a recent Debian Security Advisory:
Genkin, Shamir and Tromer discovered that RSA key material could be extracted by using the sound generated by the computer during the decryption of some chosen ciphertexts.
Side channel attacks are the ones that terrify me the most. You can cryptanalyse the algorithm and audit the implementation as much as you like, and then still disclose key material because your computer makes noise.

15 December 2013

Matthew Palmer: So you think your test suite is comprehensive?

Compare and contrast your practices with those of the SQLite development team, who go so far as to run every test with versions of malloc(3) and I/O syscalls which fail, as well as special VFS layers which reorder and drop writes. I think this sentence sums it all up:
By comparison, the project has 1084 times as much test code and test scripts: 91452.5 KSLOC.
One thousand times as much test code as production code. As Q3A says, "Impressive".

5 December 2013

Matthew Palmer: The easy bit of software development

I'm sure this isn't an original thought of mine, but it just popped into my head and I think it's something of a fundamental truth that all software developers need to keep in mind:
Writing software is easy. The hard part is writing software that works.
All too often, we get so caught up in the rush of building something that we forget that it has to work, and, all too often, we fail in some fundamental fashion, whether it's "doesn't satisfy the user's needs" or "you just broke my $FEATURE!" (which is the context I was thinking of).

26 November 2013

Matthew Palmer: The Shoe is on the Other Foot

I suppose nobody at Microsoft remembers what it was like when everyone was taking potshots at them, so they've decided it's fair game to take a cheap shot at the new Evil Empire. That being said, I wouldn't say no to one of those "Keep Calm" coffee mugs...

11 November 2013

Matthew Palmer: Timezones are not optional information

In high school, I had a science teacher who would mark your answers wrong if you forgot to include units. "What does that mean?", he would write. "17.2 elephants?" The point he was trying to get across was that a bare number, without the relevant units, wasn't precise enough to be useful. Also, carrying the units with you helped to cross-check your work: if you got a numeric answer, but the units were garbage (seconds per kilogram, for instance, when you were trying to find an acceleration), then you could be pretty sure you'd made a mistake somewhere. Fast forward to today, and I'm currently working with a database containing a pile of timestamps. Without timezones. Where the timestamps being inserted are in local time. Thankfully, so far, this particular system has always been run in one timezone, so I've only got one timezone to deal with, but the potential ramifications of systems with different timezones inserting data into this database terrify me. The naive answer is to just store everything in UTC and be done with it. I'm not particularly averse to that solution, as long as it's very clear to everyone what's going on. The correct answer, though, I think, is to always keep timezone information with your timestamps; otherwise, you'll never know whether it's 0830 in elephants...
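A quick illustration of the ambiguity, as a sketch with GNU date and a made-up timestamp: the same instant renders as two different wall-clock times, so a bare local "08:30" in the database could mean either.
# One instant, two renderings; stored without a zone, you can't tell which was meant
TZ=UTC date -d '2013-11-11 08:30:00 UTC'
TZ=Australia/Sydney date -d '2013-11-11 08:30:00 UTC'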
